5 research outputs found

    Edge enhanced deep learning system for IoT edge device security analytics

    Full text link
    The processing of locally harvested data at physically accessible edge devices opens a new avenue of security threats for edge-enhanced analytics. Cryptographic algorithms are used to secure the data processed on the edge device. However, weaknesses in the implementation of these algorithms on edge devices can lead to side-channel vulnerabilities, which are exacerbated by the application of machine-learning techniques. This research proposes a deep learning-based system, integrated at the edge device, to identify side-channel leakages. One of the challenges in designing such a system is formulating a suitable attack model for the underlying target algorithm. Based on previous findings, three machine learning-based side-channel attack models are curated and investigated for edge device security evaluations. As a test case, a standard elliptic-curve cryptographic algorithm is selected. Moreover, a quantitative analysis is provided for selecting the best attack model using standard machine-learning evaluation metrics. A comparative analysis is performed on raw unaligned data samples and reduced feature-engineered samples using edge-enhanced security analytics. The investigation concludes that a vulnerable algorithm implementation can lead to secret-key recovery from the edge device, with 96% accuracy, using a neural-network-based algorithm to analyse side-channel attacks.
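    The core idea, training a classifier to recover key bits from side-channel traces, can be sketched on synthetic data. Everything below (the leakage model, trace length, and a plain logistic classifier standing in for the paper's neural-network attack model) is an illustrative assumption, not the actual system:

```python
import math
import random

random.seed(0)

def make_trace(key_bit, n_samples=8, noise=0.3):
    # Assumed leakage model: one sample point's mean shifts when the key bit is 1.
    return [random.gauss(0.5 + (0.8 if key_bit == 1 and i == 3 else 0.0), noise)
            for i in range(n_samples)]

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(traces, labels, epochs=200, lr=0.5):
    # Plain per-sample SGD on a logistic model.
    w, b = [0.0] * len(traces[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(traces, labels):
            g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [(make_trace(bit), bit) for bit in (random.randint(0, 1) for _ in range(400))]
train, test = data[:300], data[300:]
w, b = train_logistic([x for x, _ in train], [y for _, y in train])
accuracy = sum(predict(w, b, x) == y for x, y in test) / len(test)
```

    With the leakage shift well above the noise level, even this simple classifier separates the two key-bit classes on held-out traces, which is the same principle the paper's evaluation rests on.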

    Explaining deep neural networks: A survey on the global interpretation methods

    No full text
    A substantial amount of research has been carried out on Explainable Artificial Intelligence (XAI) models, especially those which explain the deep architectures of neural networks. A number of XAI approaches have been proposed to achieve trust in Artificial Intelligence (AI) models as well as to provide explainability of specific decisions made within these models. Among these approaches, global interpretation methods have emerged as the prominent methods of explainability because they have the strength to explain every feature and the structure of the model. This survey attempts to provide a comprehensive review of global interpretation methods that completely explain the behaviour of AI models. We present a taxonomy of the available global interpretation models and systematically highlight the critical features and algorithms that differentiate them from local as well as hybrid models of explainability. Through examples and case studies from the literature, we evaluate the strengths and weaknesses of the global interpretation models and assess the challenges that arise when these methods are put into practice. We conclude the paper by providing future directions of research on how the existing challenges in global interpretation methods could be addressed, and what values and opportunities could be realized by the resolution of these challenges.
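    As a concrete instance of a global interpretation method, permutation feature importance scores each feature by how much model accuracy drops when that feature's values are shuffled across samples, explaining the model's overall behaviour rather than a single prediction. The toy model and data below are illustrative assumptions, not drawn from the survey:

```python
import random

random.seed(1)

# Toy data: the label depends strongly on x0, weakly on x1, not at all on x2.
def sample():
    x = [random.uniform(-1, 1) for _ in range(3)]
    return x, 1 if 2.0 * x[0] + 0.3 * x[1] > 0 else 0

data = [sample() for _ in range(500)]

def model(x):
    # A fixed "trained" model, assumed for illustration.
    return 1 if 2.0 * x[0] + 0.3 * x[1] > 0 else 0

def accuracy(pairs):
    return sum(model(x) == y for x, y in pairs) / len(pairs)

def permutation_importance(pairs, feature):
    # Global score: accuracy drop after shuffling one feature across samples.
    values = [x[feature] for x, _ in pairs]
    random.shuffle(values)
    permuted = []
    for (x, y), v in zip(pairs, values):
        xp = list(x)
        xp[feature] = v
        permuted.append((xp, y))
    return accuracy(pairs) - accuracy(permuted)

importances = [permutation_importance(data, f) for f in range(3)]
```

    The ranking recovers the model's structure: shuffling the dominant feature costs the most accuracy, while shuffling an unused feature costs nothing.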

    Optimization of service addition in multilevel index model for edge computing

    Full text link
    With the development of edge computing and artificial intelligence (AI) technologies, edge devices generate data at an unprecedented volume. Edge intelligence (EI) has led to the emergence of edge devices in various application domains. EI can provide efficient services to delay-sensitive applications, where edge devices are deployed as edge nodes to host the majority of execution, which can effectively manage services and improve service discovery efficiency. The multilevel index model is a well-known model for indexing services, and it has been introduced and optimized in edge environments to discover services efficiently while managing large volumes of data. However, effectively updating the multilevel index model by adding new services both promptly and precisely in dynamic edge computing environments remains a challenge. To address this issue, this article proposes a designated key selection method to improve the efficiency of adding services to the multilevel index model. Our experimental results show that, in the partial index and the full index of the multilevel index model, our method reduces the service addition time by around 84% and 76%, respectively, compared with the original key selection method, and by around 78% and 66%, respectively, compared with the random selection method. Our proposed method significantly improves service addition efficiency in the multilevel index model compared with existing state-of-the-art key selection methods, without compromising service retrieval stability to any notable level.
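    The abstract does not detail the index structure, but the role of key selection during service addition can be sketched as follows. The dictionary-based index and the simple designated-key rule here (prefer a parameter that is already an index key, so no new index entry has to be created) are assumptions standing in for the paper's actual method:

```python
class MultilevelIndex:
    def __init__(self):
        self.index = {}       # key parameter -> set of service ids
        self.services = {}    # service id -> full set of input parameters

    def add_service(self, sid, params):
        # Designated key selection (assumed rule): reuse an existing index
        # key from the service's parameters when possible; otherwise fall
        # back to a deterministic choice.
        key = next((p for p in params if p in self.index), None)
        if key is None:
            key = min(params)
        self.index.setdefault(key, set()).add(sid)
        self.services[sid] = set(params)

    def retrieve(self, request_params):
        # A service matches when all of its inputs appear in the request.
        request = set(request_params)
        candidates = set()
        for p in request:
            candidates |= self.index.get(p, set())
        return {sid for sid in candidates if self.services[sid] <= request}

idx = MultilevelIndex()
idx.add_service("s1", ["a", "b"])
idx.add_service("s2", ["b", "c"])
idx.add_service("s3", ["a", "d"])
```

    Because a matching service's parameters are all contained in the request, its designated key is always among the request parameters, so retrieval stays correct regardless of which parameter is chosen as the key; the choice only affects how much work addition and retrieval do.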

    The least-used key selection method for information retrieval in large-scale Cloud-based service repositories

    No full text
    As the number of devices connected to the Internet of Things (IoT) increases significantly, it leads to an exponential growth in the number of services that need to be processed and stored in large-scale Cloud-based service repositories. An efficient service indexing model is critical for service retrieval and for the management of large-scale Cloud-based service repositories. The multilevel index model is the state-of-the-art service indexing model of recent years for improving service discovery and combination. This paper aims to optimize the model to account for the impact of the unequal appearing probability of service retrieval request parameters and service input parameters on service retrieval and service addition operations. The least-used key selection method is proposed to narrow the search scope of service retrieval and reduce its time. The experimental results show that the proposed least-used key selection method improves service retrieval efficiency significantly compared with the designated key selection method in the case of unequal appearing probability of parameters in service retrieval requests under three indexing models.
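    The least-used idea can be sketched in a few lines: among a service's parameters, pick as the key the one that currently indexes the fewest services, so the candidate block scanned at retrieval time stays as small as possible. The index layout below is an illustrative assumption, not the paper's exact data structure:

```python
# Hypothetical index layout: each key parameter maps to the set of service
# ids stored under it.
def least_used_key(params, index):
    # Least-used rule (assumed): choose the parameter indexing the fewest
    # services; break ties alphabetically for determinism.
    return min(params, key=lambda p: (len(index.get(p, ())), p))

index = {"a": {"s1", "s3", "s4"}, "b": {"s2"}}
key = least_used_key(["a", "b", "c"], index)   # "c" indexes nothing yet
```

    Under skewed parameter frequencies, popular parameters accumulate large blocks; keying a new service on its rarest parameter keeps lookups over that key cheap, which is the intuition behind the reported retrieval speed-up.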

    A unified graph model based on molecular data binning for disease subtyping.

    No full text
    Molecular disease subtype discovery from omics data is an important research problem in precision medicine. The biggest challenges are the skewed distribution and data variability in the measurements of omics data. These challenges complicate the efficient identification of molecular disease subtypes defined by clinical differences, such as survival. Existing approaches adopt kernels to construct patient similarity graphs from each view through pairwise matching. However, the distance functions used in kernels are unable to utilize the potentially critical information of extreme values and data variability, which leads to a lack of robustness. In this paper, a novel robust distance metric (ROMDEX) is proposed to construct similarity graphs for molecular disease subtypes from omics data, which is able to address the challenges of data variability and extreme values. The proposed approach is validated on multiple TCGA cancer datasets, and the results are compared with multiple baseline disease subtyping methods. The evaluation of results is based on Kaplan-Meier survival time analysis, which is validated using statistical tests, e.g., the Cox proportional hazards test (Cox p-value). We reject the null hypothesis that the cohorts have the same hazard for p-values less than 0.05. The proposed approach achieved best p-values of 0.00181, 0.00171, and 0.00758 for Gene Expression, DNA Methylation, and MicroRNA data respectively, which shows a significant difference in survival between the cohorts. In the results, the proposed approach outperformed the existing state-of-the-art (MRGC, PINS, SNF, Consensus Clustering and Icluster+) disease subtyping approaches on various individual disease views of multiple TCGA datasets.
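    The abstract does not specify ROMDEX itself; as an illustration of the general recipe, the sketch below scales each feature by its median absolute deviation (a generic robust choice that damps extreme values, used here purely as a stand-in) and then thresholds pairwise distances into a patient similarity graph:

```python
import statistics

def mad_scale(column):
    # Robust per-feature scaling: center on the median and divide by the
    # median absolute deviation, so a single extreme value cannot dominate
    # the scale the way it would with mean/standard-deviation scaling.
    med = statistics.median(column)
    mad = statistics.median(abs(v - med) for v in column) or 1.0
    return [(v - med) / mad for v in column]

def robust_distance(a, b):
    # Mean absolute difference over robustly scaled features.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def similarity_graph(patients, threshold):
    # Scale feature-wise, then connect patient pairs whose distance falls
    # below the threshold.
    cols = [mad_scale(col) for col in zip(*patients)]
    scaled = list(zip(*cols))
    n = len(scaled)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if robust_distance(scaled[i], scaled[j]) < threshold}

patients = [[1.0, 2.0], [1.1, 2.1], [50.0, 2.0]]  # third has an extreme value
edges = similarity_graph(patients, threshold=1.0)
```

    The outlying patient is kept apart while the two similar patients are linked, which is the behaviour a robust metric is meant to preserve before the graph is handed to a clustering step.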